Conversation

@Zblocker64 (Collaborator) commented Sep 11, 2025

Description

Closes: #XXXX

This PR fixes critical pagination bugs in the market query handlers that were causing providers to see duplicate orders during catchup operations. The issues were:

  1. Pagination key reset bug: The req.Pagination.Key was being reset to nil for subsequent state iterations (idx > 0), causing pagination to restart from the beginning instead of continuing sequentially.

  2. Base64 encoding/decoding mismatch: CLI was passing Base64-encoded pagination keys, but the server expected binary data, leading to checksum validation failures.

  3. SearchPrefix corruption: The raw Cosmos SDK nextKey was being corrupted by prepending searchPrefix, making it incompatible with FilteredPaginate on subsequent requests.

Key Changes:

  • x/market/keeper/grpc_query.go:
    • Fixed pagination key reset logic: if idx > 0 && !hasPaginationKey
    • Added Base64 detection and decoding for pagination keys
    • Removed searchPrefix prepending that corrupted raw Cosmos SDK nextKeys
    • Applied fixes to Orders, Bids, and Leases query handlers
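The reset-logic change can be sketched in isolation. This is a simplified, hypothetical model (`stateKeys`, the state names, and the key representation are illustrative), not the actual handler code from `x/market/keeper/grpc_query.go`:

```go
package main

import "fmt"

// stateKeys returns the pagination key used for each state bucket, given
// the key (if any) supplied by the client.
func stateKeys(states []string, clientKey []byte) [][]byte {
	hasPaginationKey := len(clientKey) > 0
	key := clientKey
	keys := make([][]byte, 0, len(states))
	for idx := range states {
		// Before the fix, key was cleared for every idx > 0, even when the
		// client supplied one, restarting pagination from the beginning.
		if idx > 0 && !hasPaginationKey {
			key = nil
		}
		keys = append(keys, key)
	}
	return keys
}

func main() {
	fmt.Println(stateKeys([]string{"open", "active", "closed"}, nil))
	fmt.Println(stateKeys([]string{"open", "active", "closed"}, []byte("k")))
}
```

With no client key, every state starts from the beginning; with a client-supplied key, the key is preserved across state iterations instead of being wiped whenever `idx > 0`, which was the pre-fix behavior.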

Testing:

  • Verified sequential pagination works: first query returns orders 1-2, second query returns orders 3-4
  • Confirmed no more "invalid checksum" errors
  • Tested multi-state pagination across OrderOpen, OrderActive, OrderClosed states
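The sequential behavior described in the first bullet can be illustrated with a toy pager in which the returned key (here a plain index) feeds the next request. This is a simplification for illustration only, not the SDK's actual `NextKey` mechanics:

```go
package main

import "fmt"

// page returns up to limit orders starting at key, plus the next key
// (-1 when there are no more results).
func page(orders []string, key, limit int) ([]string, int) {
	end := key + limit
	if end >= len(orders) {
		return orders[key:], -1
	}
	return orders[key:end], end
}

func main() {
	orders := []string{"order1", "order2", "order3", "order4"}
	for key := 0; key != -1; {
		var batch []string
		batch, key = page(orders, key, 2)
		fmt.Println(batch) // first call: orders 1-2; second call: orders 3-4
	}
}
```

A buggy pager that reset `key` between requests would return orders 1-2 twice, which is the duplicate-order symptom this PR addresses.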

Author Checklist

All items are required. Please add a note to the item if the item is not applicable and
please add links to any relevant follow-up issues.

I have...

  • included the correct type prefix in the PR title
  • added ! to the type prefix if API or client breaking change
  • targeted the correct branch (see PR Targeting)
  • provided a link to the relevant issue or specification
  • included the necessary unit and integration tests
  • added a changelog entry to CHANGELOG.md
  • included comments for documenting Go code
  • updated the relevant documentation or specification
  • reviewed "Files changed" and left comments if necessary
  • confirmed all CI checks have passed

@Zblocker64 Zblocker64 requested a review from a team as a code owner September 11, 2025 12:41

coderabbitai bot commented Sep 11, 2025

Warning

Rate limit exceeded

@Zblocker64 has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 12 minutes and 4 seconds before requesting another review.

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

📥 Commits

Reviewing files that changed from the base of the PR and between fa46bbe and 595cf81.

📒 Files selected for processing (1)
  • x/market/keeper/grpc_query.go (8 hunks)

Walkthrough

Adds support for accepting raw or Base64-encoded pagination keys for Orders, Bids, and Leases queries; attempts raw decode, then Base64-decode+decode, mapping decode errors to appropriate gRPC codes. Preserves SDK NextKey bytes (no prefixing), introduces hasPaginationKey flow, adjusts reverse-search flag for Bids, and removes an isBase64String helper. No public API changes.

Changes

Market gRPC pagination changes (x/market/keeper/grpc_query.go):
  • Accepts both raw and Base64-encoded req.Pagination.Key: tries DecodePaginationKey on the raw key; if that fails, attempts base64.StdEncoding.DecodeString followed by DecodePaginationKey.
  • Maps invalid Base64 errors to gRPC InvalidArgument; other decode errors to Internal.
  • Introduces a hasPaginationKey boolean and switch-based control flow to distinguish keyed from non-keyed requests.
  • Preserves the raw pageRes.NextKey from the SDK (removes the prior searchPrefix prefixing).
  • Adjusts multi-state pagination: clears req.Pagination.Key between states only when the client didn't supply a key.
  • Bids: uses unsolicited[0] to determine the reverse-search trigger.
  • Leases: applies the same Base64 fallback and raw NextKey preservation.
  • Adds imports encoding/base64 and errors; removes the isBase64String helper.
  • No exported/public signature changes.
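The raw-then-Base64 fallback can be sketched as follows. `decodeKey` is a hypothetical stand-in for `query.DecodePaginationKey` (here it just checks a one-byte tag); only the fallback control flow mirrors the change described above:

```go
package main

import (
	"encoding/base64"
	"errors"
	"fmt"
)

var errInvalidKey = errors.New("invalid pagination key")

// decodeKey stands in for the real pagination-key decoder: it accepts
// keys tagged with a leading 0x01 byte and rejects everything else.
func decodeKey(b []byte) ([]byte, error) {
	if len(b) > 0 && b[0] == 0x01 {
		return b[1:], nil
	}
	return nil, errInvalidKey
}

// decodePaginationKey tries the raw bytes first, then falls back to
// Base64, mirroring the keyed-request handling described above.
func decodePaginationKey(key []byte) ([]byte, error) {
	if out, err := decodeKey(key); err == nil {
		return out, nil
	}
	decoded, err := base64.StdEncoding.DecodeString(string(key))
	if err != nil {
		// Malformed client input; would map to codes.InvalidArgument.
		return nil, errInvalidKey
	}
	return decodeKey(decoded)
}

func main() {
	raw := []byte{0x01, 'o', 'k'}
	b64 := []byte(base64.StdEncoding.EncodeToString(raw))
	for _, k := range [][]byte{raw, b64} {
		out, err := decodePaginationKey(k)
		fmt.Printf("%q %v\n", out, err)
	}
}
```

Both the raw and the Base64-encoded form of the same key decode to the same result, so CLI clients that emit Base64 keys and clients that pass raw bytes are both served.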

Sequence Diagram(s)

sequenceDiagram
  participant Client
  participant Keeper as MarketKeeper
  participant SDK as CosmosSDK

  rect rgb(250,250,255)
  Client->>Keeper: ListOrders/ListBids/ListLeases with Pagination.Key (raw or base64)
  end

  rect rgb(245,255,240)
  Keeper->>Keeper: if Pagination.Key present\n-> try DecodePaginationKey(raw)\n-> if fail: try base64.Decode -> DecodePaginationKey(decoded)\n-> set hasPaginationKey flag accordingly
  end

  rect rgb(255,250,240)
  Keeper->>SDK: perform paginated store query using raw Pagination.Key (if set)
  SDK-->>Keeper: results + pageRes.NextKey (raw bytes)
  end

  rect rgb(240,255,255)
  Keeper->>Client: return results and raw NextKey (no prefixing)
  end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes


"I hopped through bytes and keys today,
Tried raw then Base64 on the way.
NextKey stays as the SDK sends,
Across states my trail extends.
— a rabbit, nibbling bytes 🐇"

Pre-merge checks (3 passed)

✅ Passed checks (3 passed)
  • Title Check: Passed. The title concisely and accurately reflects the primary changes (pagination key handling and Base64 decoding) in the x/market query handlers, so it is clear for a teammate reviewing history; the repeated "fix" after the conventional-commit prefix is redundant.
  • Docstring Coverage: Passed. No functions found in the changes; the docstring coverage check was skipped.
  • Description Check: Passed. The PR description directly describes the fixes for pagination key handling, Base64 decoding of pagination keys, and removal of searchPrefix corruption in x/market/keeper/grpc_query.go. It names the affected areas (Orders, Bids, Leases), explains the root causes, and summarizes tests verifying sequential and multi-state pagination.

@coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
x/market/keeper/grpc_query.go (1)

200-207: Fix out-of-bounds index in x/market/keeper/grpc_query.go

len(unsolicited) is validated to be 1, so indexing unsolicited[1] is out of range and will panic; use unsolicited[0].

-        if unsolicited[1] == 1 {
+        if unsolicited[0] == 1 {
             reverseSearch = true
         }
🧹 Nitpick comments (2)
x/market/keeper/grpc_query.go (2)

58-66: Base64 detection is heuristic; prefer a decode-and-fallback strategy (and reuse across endpoints).

isBase64String can misclassify raw binary that happens to be base64-safe. Instead, try raw DecodePaginationKey first; if it’s invalid, attempt base64 decode and retry. This removes the need for the heuristic and makes behavior consistent. Also apply the same handling in Bids and Leases for client compatibility.

-        // Handle Base64 encoded pagination keys
-        paginationKeyBytes := req.Pagination.Key
-        if isBase64String(req.Pagination.Key) {
-            paginationKeyBytes, err = base64.StdEncoding.DecodeString(string(req.Pagination.Key))
-            if err != nil {
-                return nil, status.Error(codes.InvalidArgument, "invalid base64 pagination key")
-            }
-        }
-
-        states, searchPrefix, key, _, err = query.DecodePaginationKey(paginationKeyBytes)
+        // Accept both raw and base64-encoded keys: try raw first, then base64.
+        paginationKeyBytes := req.Pagination.Key
+        states, searchPrefix, key, _, err = query.DecodePaginationKey(paginationKeyBytes)
+        if err != nil {
+            if decoded, decErr := base64.StdEncoding.DecodeString(string(req.Pagination.Key)); decErr == nil {
+                states, searchPrefix, key, _, err = query.DecodePaginationKey(decoded)
+            }
+        }

Also applies to: 562-573


67-70: Return InvalidArgument for bad pagination keys; don’t surface client errors as Internal.

Map ErrInvalidPaginationKey to codes.InvalidArgument. This improves API semantics and debuggability.

 import (
     "context"
+    "errors"
     "encoding/base64"
-        if err != nil {
-            return nil, status.Error(codes.Internal, err.Error())
-        }
+        if err != nil {
+            if errors.Is(err, query.ErrInvalidPaginationKey) {
+                return nil, status.Error(codes.InvalidArgument, err.Error())
+            }
+            return nil, status.Error(codes.Internal, err.Error())
+        }

Also applies to: 3-9

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8dcc9fc and 30092df.

📒 Files selected for processing (1)
  • x/market/keeper/grpc_query.go (6 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
x/market/keeper/grpc_query.go (1)
util/query/pagination.go (1)
  • DecodePaginationKey (16-89)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: lint
  • GitHub Check: release-dry-run
  • GitHub Check: coverage
  • GitHub Check: build-bins
  • GitHub Check: tests
🔇 Additional comments (3)
x/market/keeper/grpc_query.go (3)

140-141: LGTM: stop prefix-prepending NextKey.

Keeping the raw Cosmos SDK NextKey avoids corruption and maintains compatibility with subsequent FilteredPaginate calls.


287-288: LGTM: same fix applied to Bids.

Not prepending searchPrefix to NextKey here is correct and consistent with Orders.


5-5: Import added appropriately.

encoding/base64 is required for page-key handling.

@coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
x/market/keeper/grpc_query.go (1)

101-103: Always clear key when advancing to the next state to recompute searchPrefix.

Gating on hasPaginationKey stalls cross-state iteration and can cause duplicates; next state reuses previous prefix. Clear unconditionally.

Apply this diff:

-        if idx > 0 && !hasPaginationKey {
-            req.Pagination.Key = nil
-        }
+        if idx > 0 {
+            req.Pagination.Key = nil
+        }
🧹 Nitpick comments (3)
x/market/keeper/grpc_query.go (3)

53-67: Base64 fallback: also accept raw/URL-safe variants and reduce duplication.

Some clients emit RawStd or URL-safe Base64. Broaden acceptance and reuse across Orders/Bids/Leases.

Apply this diff to broaden decoding here (repeat same pattern in Bids/Leases decode blocks):

-        if err != nil {
-            if decoded, decErr := base64.StdEncoding.DecodeString(string(req.Pagination.Key)); decErr == nil {
-                states, searchPrefix, key, _, err = query.DecodePaginationKey(decoded)
-            }
-        }
+        if err != nil {
+            // Try multiple Base64 alphabets (std/raw/url).
+            decoders := []*base64.Encoding{
+                base64.StdEncoding, base64.RawStdEncoding,
+                base64.URLEncoding, base64.RawURLEncoding,
+            }
+            for _, enc := range decoders {
+                if decoded, decErr := enc.DecodeString(string(req.Pagination.Key)); decErr == nil {
+                    states, searchPrefix, key, _, err = query.DecodePaginationKey(decoded)
+                    if err == nil {
+                        break
+                    }
+                }
+            }
+        }

197-205: Broaden Base64 handling here too (mirror Orders).

Adopt the multi-decoder fallback to accept std/raw/url Base64.

Apply this diff:

-        if err != nil {
-            if decoded, decErr := base64.StdEncoding.DecodeString(string(req.Pagination.Key)); decErr == nil {
-                states, searchPrefix, key, unsolicited, err = query.DecodePaginationKey(decoded)
-            }
-        }
+        if err != nil {
+            decoders := []*base64.Encoding{
+                base64.StdEncoding, base64.RawStdEncoding,
+                base64.URLEncoding, base64.RawURLEncoding,
+            }
+            for _, enc := range decoders {
+                if decoded, decErr := enc.DecodeString(string(req.Pagination.Key)); decErr == nil {
+                    states, searchPrefix, key, unsolicited, err = query.DecodePaginationKey(decoded)
+                    if err == nil {
+                        break
+                    }
+                }
+            }
+        }

363-370: Mirror Base64 multi-decoder fallback in Leases.

Keep parity with Orders/Bids for client compatibility.

Apply this diff:

-        if err != nil {
-            if decoded, decErr := base64.StdEncoding.DecodeString(string(req.Pagination.Key)); decErr == nil {
-                states, searchPrefix, key, unsolicited, err = query.DecodePaginationKey(decoded)
-            }
-        }
+        if err != nil {
+            decoders := []*base64.Encoding{
+                base64.StdEncoding, base64.RawStdEncoding,
+                base64.URLEncoding, base64.RawURLEncoding,
+            }
+            for _, enc := range decoders {
+                if decoded, decErr := enc.DecodeString(string(req.Pagination.Key)); decErr == nil {
+                    states, searchPrefix, key, unsolicited, err = query.DecodePaginationKey(decoded)
+                    if err == nil {
+                        break
+                    }
+                }
+            }
+        }
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 426eb82 and fa46bbe.

📒 Files selected for processing (1)
  • x/market/keeper/grpc_query.go (8 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
x/market/keeper/grpc_query.go (1)
util/query/pagination.go (2)
  • DecodePaginationKey (16-89)
  • ErrInvalidPaginationKey (12-12)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: lint
  • GitHub Check: release-dry-run
  • GitHub Check: coverage
  • GitHub Check: build-macos
  • GitHub Check: tests
  • GitHub Check: build-bins
🔇 Additional comments (5)
x/market/keeper/grpc_query.go (5)

5-6: LGTM on new imports.

Required for Base64 fallback and error checks.


142-142: LGTM: Keep raw SDK NextKey (no prefixing).

Prevents key corruption across page boundaries.


218-221: LGTM: reverseSearch flag carried via unsolicited[0].

Consistent with Encode/Decode contract.


246-248: LGTM: clear key on state transitions.

Prevents stale prefix reuse during multi-state scans.


411-413: LGTM: clear key across state buckets.

Ensures correct prefix recomputation per state.

@Zblocker64 Zblocker64 enabled auto-merge (squash) September 13, 2025 16:13
@Zblocker64 (Collaborator, Author) commented:

Closing in favor of a new PR with a new ReadPageRequest function.

@Zblocker64 Zblocker64 closed this Sep 15, 2025
auto-merge was automatically disabled September 15, 2025 13:36

Pull request was closed
